
    Towards Open Temporal Graph Neural Networks

    Graph neural networks (GNNs) for temporal graphs have recently attracted increasing attention, where a common assumption is that the class set of nodes is closed. In real-world scenarios, however, one often faces an open-set problem in which the class set grows dynamically over time. This raises two major challenges for existing dynamic GNN methods: (i) how to propagate appropriate information in an open temporal graph, where new-class nodes are often linked to old-class nodes. This creates a sharp contradiction: typical GNNs tend to make the embeddings of connected nodes similar, whereas we expect the embeddings of these two interacting nodes to remain distinguishable, since they belong to different classes; (ii) how to avoid catastrophic forgetting of old classes when learning new classes that appear in the temporal graph. In this paper, we propose a general and principled learning approach for open temporal graphs, called OTGNet, to address these two challenges. We assume that the knowledge of a node can be disentangled into class-relevant and class-agnostic components, and we explore a new message-passing mechanism that extends the information bottleneck principle to propagate only class-agnostic knowledge between nodes of different classes, avoiding the aggregation of conflicting information. Moreover, we devise a strategy to select both important and diverse triad sub-graph structures for effective class-incremental learning. Extensive experiments on three real-world datasets from different domains demonstrate the superiority of our method over the baselines. (Comment: ICLR 2023 Oral)
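The split-and-filter idea in this abstract can be sketched in a few lines: each node embedding is divided into class-relevant and class-agnostic parts, and only the class-agnostic part crosses edges that connect nodes of different classes. The dimension split, the 0.5 mixing weight, and the function names below are illustrative assumptions, not OTGNet's actual implementation (which learns the disentanglement via an information bottleneck objective).

```python
def propagate(embeddings, classes, edges, split=2):
    """embeddings: dict node -> list[float]; the first `split` dims are
    treated as class-relevant, the rest as class-agnostic (an assumption
    for this sketch; OTGNet learns the disentanglement)."""
    new = {n: list(v) for n, v in embeddings.items()}
    for u, v in edges:
        for a, b in ((u, v), (v, u)):
            if classes[a] == classes[b]:
                msg = embeddings[b]  # same class: pass the full message
            else:
                # different classes: zero out the class-relevant dims so
                # conflicting class information is not aggregated
                msg = [0.0] * split + embeddings[b][split:]
            new[a] = [x + 0.5 * m for x, m in zip(new[a], msg)]
    return new

emb = {0: [1.0, 0.0, 2.0], 1: [0.0, 1.0, 4.0]}
out = propagate(emb, classes={0: "old", 1: "new"}, edges=[(0, 1)])
```

Note how the old-class node 0 absorbs only the class-agnostic third dimension from the new-class node 1, keeping its class-relevant dimensions untouched.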

    FreeKD: Free-direction Knowledge Distillation for Graph Neural Networks

    Knowledge distillation (KD) has demonstrated its effectiveness for boosting the performance of graph neural networks (GNNs), where the goal is to distill knowledge from a deeper teacher GNN into a shallower student GNN. However, it is difficult to train a satisfactory teacher GNN due to the well-known over-parameterization and over-smoothing issues, leading to invalid knowledge transfer in practical applications. In this paper, we propose the first free-direction knowledge distillation framework via reinforcement learning for GNNs, called FreeKD, which no longer requires a deeper, well-optimized teacher GNN. The core idea of our work is to collaboratively train two shallower GNNs that exchange knowledge with each other via hierarchical reinforcement learning. Observing that a typical GNN model often performs better at some nodes and worse at others during training, we devise a dynamic, free-direction knowledge transfer strategy consisting of two levels of actions: 1) a node-level action determines the direction of knowledge transfer between the corresponding nodes of the two networks; and 2) a structure-level action determines which of the local structures generated by the node-level actions should be propagated. In essence, FreeKD is a general and principled framework that is naturally compatible with GNNs of different architectures. Extensive experiments on five benchmark datasets demonstrate that FreeKD outperforms the two base GNNs by a large margin and is effective across various GNNs. More surprisingly, FreeKD achieves comparable or even better performance than traditional KD algorithms that distill knowledge from a deeper and stronger teacher GNN. (Comment: Accepted to KDD 2022)
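The node-level action described above can be illustrated with a toy stand-in: for each node, knowledge flows from whichever of the two shallow networks is currently more confident there. FreeKD learns these directions with a reinforcement-learning policy; the direct confidence comparison below is a hypothetical simplification used only to show the direction-per-node idea.

```python
def transfer_directions(conf_a, conf_b):
    """Per-node transfer direction: 'a->b' when network A is at least as
    confident at that node as network B, else 'b->a'. A stand-in for the
    learned node-level policy in FreeKD."""
    return ["a->b" if ca >= cb else "b->a" for ca, cb in zip(conf_a, conf_b)]

# Per-node confidences of the two shallow GNNs (toy numbers)
dirs = transfer_directions([0.9, 0.3, 0.7], [0.5, 0.8, 0.7])
```

A structure-level action would then operate on top of these per-node decisions, choosing which local neighborhoods of directed transfers to actually apply.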

    Robust Knowledge Adaptation for Dynamic Graph Neural Networks

    Graph-structured data often possess dynamic characteristics, e.g., the addition of links and nodes, in many real-world applications. Recent years have witnessed increasing attention paid to dynamic graph neural networks for modeling such data, where almost all existing approaches assume that when a new link is built, the embeddings of the neighboring nodes should be updated by learning the temporal dynamics to propagate the new information. However, such approaches suffer from the limitation that if the node introduced by a new connection contains noisy information, propagating its knowledge to other nodes is unreliable and may even lead to model collapse. In this paper, we propose AdaNet: a robust knowledge Adaptation framework via reinforcement learning for dynamic graph neural Networks. In contrast to previous approaches, which immediately update the embeddings of neighboring nodes once a new link is added, AdaNet adaptively determines which nodes should be updated in response to the new link. Considering that the decision of whether to update the embedding of one neighbor node greatly impacts the other neighbor nodes, we formulate node-update selection as a sequential decision problem and address it via reinforcement learning. In this way, we can adaptively propagate knowledge to other nodes and learn robust node embedding representations. To the best of our knowledge, our approach constitutes the first attempt to explore robust knowledge adaptation via reinforcement learning for dynamic graph neural networks. Extensive experiments on three benchmark datasets demonstrate that AdaNet achieves state-of-the-art performance. In addition, we run experiments with different degrees of noise added to the datasets, quantitatively and qualitatively illustrating the robustness of AdaNet. (Comment: 14 pages, 6 figures)
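The selective-update idea can be sketched as follows: when a new link arrives, each neighbor is either updated or skipped. AdaNet makes these choices with a learned sequential (reinforcement-learning) policy; the fixed noise-score threshold below is a hypothetical stand-in for that policy, and all names and numbers are illustrative.

```python
def select_updates(neighbors, noise_scores, threshold=0.5):
    """Given a new link, update only the neighbors whose estimated noise
    is below the threshold. A toy proxy for AdaNet's learned policy,
    which instead decides node by node in sequence."""
    return [n for n in neighbors if noise_scores[n] < threshold]

# Hypothetical per-neighbor noise estimates after a new link is added
updated = select_updates(["u1", "u2", "u3"],
                         {"u1": 0.1, "u2": 0.9, "u3": 0.4})
```

The noisy neighbor "u2" is skipped, so its unreliable information is never propagated into the rest of the graph.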

    Learning to Generate Parameters of ConvNets for Unseen Image Data

    Typical convolutional neural networks (ConvNets) depend heavily on large amounts of image data and resort to iterative optimization algorithms (e.g., SGD or Adam) to learn network parameters, which makes training very time- and resource-intensive. In this paper, we propose a new training paradigm that formulates the parameter learning of ConvNets as a prediction task: given a ConvNet architecture, we observe that there exist correlations between image datasets and their corresponding optimal network parameters, and we explore whether a hyper-mapping between them can capture these relations, such that we can directly predict the network parameters for an image dataset never seen during training. To this end, we put forward a new hypernetwork-based model, called PudNet, which learns a mapping between datasets and their corresponding network parameters, and then predicts parameters for unseen data with only a single forward propagation. Moreover, our model benefits from a series of adaptive hyper recurrent units sharing weights to capture the dependencies of parameters across different network layers. Extensive experiments demonstrate that our proposed method achieves good efficacy on unseen image datasets in two settings: intra-dataset prediction and inter-dataset prediction. PudNet also scales well to large datasets, e.g., ImageNet-1K. It takes 8,967 GPU seconds to train ResNet-18 on ImageNet-1K with GC from scratch, reaching a top-5 accuracy of 44.65%. In contrast, PudNet takes only 3.89 GPU seconds to predict the network parameters of ResNet-18, achieving comparable performance (44.92%), more than 2,300 times faster than the traditional training paradigm.
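The predict-instead-of-train paradigm can be sketched minimally: summarize a dataset into a fixed-size statistic, then run one forward pass of a hypernetwork to emit the target network's parameters. The per-feature mean summary and the linear "hypernetwork" below are illustrative assumptions; PudNet itself uses adaptive hyper recurrent units with shared weights across layers.

```python
def dataset_summary(images):
    """Toy dataset statistic: per-feature mean over the dataset.
    PudNet uses a richer learned summary; this is an assumption."""
    dim = len(images[0])
    return [sum(img[i] for img in images) / len(images) for i in range(dim)]

def hypernet_predict(summary, hyper_weights):
    """One forward pass: dataset summary (dim d) -> flat target-network
    parameters (dim p), here via a single linear map."""
    return [sum(w * s for w, s in zip(row, summary)) for row in hyper_weights]

data = [[1.0, 3.0], [3.0, 5.0]]                      # a tiny "dataset"
params = hypernet_predict(dataset_summary(data),     # summary = [2.0, 4.0]
                          [[1.0, 0.0], [0.5, 0.5]])  # toy hypernetwork weights
```

The key point is that producing `params` costs one forward pass, regardless of how long iterative training on the dataset would take.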

    Parameter-Efficient Conformers via Sharing Sparsely-Gated Experts for End-to-End Speech Recognition

    While transformers and their conformer variants show promising performance in speech recognition, their heavy parameterization incurs a large memory cost during training and inference. Some works use cross-layer weight sharing to reduce the number of model parameters; however, the inevitable loss of capacity harms model performance. To address this issue, this paper proposes a parameter-efficient conformer that shares sparsely-gated experts. Specifically, we use a sparsely-gated mixture-of-experts (MoE) to extend the capacity of a conformer block without increasing computation. Then, the parameters of grouped conformer blocks are shared to reduce the parameter count. Next, to ensure that the shared blocks retain the flexibility to adapt representations at different levels, we design the MoE routers and normalization layers individually for each block. Moreover, we use knowledge distillation to further improve performance. Experimental results show that the proposed model achieves competitive performance with 1/3 of the encoder parameters, compared with the full-parameter model. (Comment: Accepted at INTERSPEECH 2022)
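The sparsely-gated routing at the heart of this design can be shown with a toy top-1 MoE layer: expert weights can be shared across a group of blocks, while each block keeps its own router, so only one expert runs per input and computation stays constant. The scalar inputs, router weights, and expert functions below are toy assumptions for illustration.

```python
def top1_moe(x, router_weights, experts):
    """Route a (scalar, for simplicity) input to the expert with the
    highest router score and apply only that expert -- the 'sparse'
    part of a sparsely-gated MoE."""
    scores = [w * x for w in router_weights]
    k = max(range(len(scores)), key=lambda i: scores[i])
    return experts[k](x), k

# The expert list could be shared by several conformer blocks, while
# each block keeps its own router_weights (per the per-block routers idea).
experts = [lambda v: v + 1.0, lambda v: v * 2.0]
y, chosen = top1_moe(3.0, router_weights=[0.2, 0.9], experts=experts)
```

Because only the chosen expert executes, adding experts grows capacity without growing per-input computation; sharing the expert list across blocks keeps the parameter count down.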

    Supported nickel-rhenium catalysts for selective hydrogenation of methyl esters to alcohols

    The addition of Re to Ni on TiO2 yields efficient catalysts for the hydrogenation of acids and esters to alcohols under mild conditions. Rhenium promotes the formation of atomically dispersed and sub-nanometre-sized bimetallic species that interact strongly with the oxide support.

    The transcription factor Sp1 modulates RNA polymerase III gene transcription by controlling BRF1 and GTF3C2 expression in human cells

    Specificity protein 1 (Sp1) is an important transcription factor implicated in numerous cellular processes. However, whether Sp1 is involved in the regulation of RNA polymerase III (Pol III)-directed gene transcription in human cells remains unknown. Here, we first show that filamin A (FLNA) represses Sp1 expression, as well as expression of TFIIB-related factor 1 (BRF1) and general transcription factor III C subunit 2 (GTF3C2), in HeLa, 293T, and SaOS2 cell lines stably expressing FLNA-silencing shRNAs. Both BRF1 promoter 4 (BRF1P4) and GTF3C2 promoter 2 (GTF3C2P2) contain putative Sp1-binding sites, suggesting that Sp1 affects Pol III gene transcription by regulating BRF1 and GTF3C2 expression. We demonstrate that Sp1 knockdown inhibits Pol III gene transcription, BRF1 and GTF3C2 expression, and the proliferation of 293T and HeLa cells, whereas Sp1 overexpression enhances these activities. We obtained comparable results in a cell line in which both FLNA and Sp1 were depleted, indicating that Sp1 regulates Pol III gene transcription independently of FLNA expression. Reporter gene assays showed that altering Sp1 expression affects BRF1P4 and GTF3C2P2 activation, suggesting that Sp1 modulates Pol III-mediated gene transcription by controlling BRF1 and GTF3C2 expression. Further analysis revealed that Sp1 interacts with, and thereby promotes the occupancy of, TATA box-binding protein, TFIIAα, and p300 at both BRF1P4 and GTF3C2P2. These findings indicate that Sp1 controls Pol III-directed transcription and shed light on how Sp1 regulates cancer cell proliferation.